
    Efficient Compressive Sampling of Spatially Sparse Fields in Wireless Sensor Networks

    Wireless sensor networks (WSNs), i.e., networks of autonomous, wireless sensing nodes spatially deployed over a geographical area, are often faced with the acquisition of spatially sparse fields. In this paper, we present a novel bandwidth- and energy-efficient Compressive Sampling (CS) scheme for the acquisition of spatially sparse fields in a WSN. The contribution of the paper is twofold. First, we introduce a sparse, structured CS matrix and analytically show that it allows accurate reconstruction of two-dimensional spatially sparse signals, such as those occurring in several surveillance applications. Second, we analytically evaluate the energy and bandwidth consumption of our CS scheme when it is applied to data acquisition in a WSN. Numerical results demonstrate that our CS scheme achieves significant energy and bandwidth savings with respect to state-of-the-art approaches when employed for sensing a spatially sparse field by means of a WSN. Comment: Submitted to EURASIP Journal on Advances in Signal Processing.
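
    As a rough illustration of the idea, the sketch below pairs a sparse, structured sensing matrix with a spatially sparse 2-D field and recovers it with plain orthogonal matching pursuit (OMP). This is not the paper's scheme: the matrix structure, sizes, and sparsity levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_side = 16                        # field is n_side x n_side, vectorized to n
n = n_side * n_side
k = 5                              # number of active field locations
m = 60                             # number of CS measurements

# Spatially sparse field: k point sources on a 2-D grid
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

# Sparse, structured sensing matrix: each measurement sums a small random
# subset of nodes, so each sensor contributes to few measurements
d = 8                              # nonzeros per measurement row
Phi = np.zeros((m, n))
for i in range(m):
    Phi[i, rng.choice(n, size=d, replace=False)] = 1.0 / np.sqrt(d)

y = Phi @ x                        # measurements collected at the sink

# Orthogonal matching pursuit recovery (k iterations)
residual, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```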

    CLEVER: a cooperative and cross-layer approach to video streaming in HetNets

    We investigate the problem of providing a video streaming service to mobile users in a heterogeneous cellular network composed of micro e-NodeBs (eNBs) and macro e-NodeBs (MeNBs). More specifically, we target a cross-layer dynamic allocation of the bandwidth resources available over a set of eNBs and one MeNB, with the goal of reducing the per-chunk delay experienced by users. After optimally formulating the problem of minimizing the chunk delay, we detail the Cross LayEr Video stReaming (CLEVER) algorithm to tackle it in practice. CLEVER makes allocation decisions on the basis of information retrieved from the application layer as well as from lower layers. Results, obtained over two representative case studies, show that CLEVER is able to limit the chunk delay while also reducing the amount of bandwidth reserved for offloaded users on the MeNB, as well as the number of offloaded users. In addition, we show that CLEVER performs clearly better than two selected reference algorithms while remaining very close to a best bound. Finally, we show that our solution achieves high fairness indices and good levels of Quality of Experience (QoE).
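
    As a rough illustration of the kind of cross-layer decision CLEVER makes, the toy sketch below greedily assigns bandwidth units to the user whose chunk delay decreases the most, combining an application-layer quantity (chunk size) with a lower-layer one (per-unit PHY rate). It is not the CLEVER algorithm; all quantities are illustrative assumptions.

```python
import numpy as np

chunk_bits = np.array([4e6, 2e6, 8e6])       # per-user chunk size (app layer)
rate_per_unit = np.array([1e6, 2e6, 1.5e6])  # per-unit PHY rate per user
units = np.zeros(3, dtype=int)               # bandwidth units assigned so far

def delay(u):
    """Chunk delay per user; infinite until a user holds at least one unit."""
    d = np.full(chunk_bits.shape, np.inf)
    mask = u > 0
    d[mask] = chunk_bits[mask] / (u[mask] * rate_per_unit[mask])
    return d

for _ in range(10):                          # 10 bandwidth units to distribute
    gain = delay(units) - delay(units + 1)   # marginal delay reduction
    units[int(np.argmax(gain))] += 1

print("units per user:", units)
print("chunk delays [s]:", delay(units))
```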

    Green compressive sampling reconstruction in IoT networks

    In this paper, we address the problem of green Compressed Sensing (CS) reconstruction within Internet of Things (IoT) networks, both in terms of computing architecture and of reconstruction algorithms. The approach is novel since, unlike most of the literature dealing with energy-efficient gathering of CS measurements, we focus on the energy efficiency of the signal reconstruction stage given the CS measurements. As a first novel contribution, we present an analysis of the energy consumption within the IoT network under two computing architectures. In the first, reconstruction takes place within the IoT network and the reconstructed data are encoded and transmitted out of the IoT network; in the second, all the CS measurements are forwarded to off-network devices for reconstruction and storage, i.e., reconstruction is off-loaded. Our analysis shows that the two architectures differ significantly in terms of consumed energy, and it outlines a theoretically motivated criterion for selecting a green CS reconstruction computing architecture. Specifically, we present a suitable decision function to determine which architecture outperforms the other in terms of energy efficiency. The presented decision function depends on a few IoT network features, such as the network size, the sink connectivity, and other system parameters. As a second novel contribution, we show how to move beyond the classical performance comparison of different CS reconstruction algorithms, usually carried out only with respect to the achieved accuracy. Specifically, we consider the consumed energy and analyze the energy vs. accuracy trade-off. The approach presented herein, jointly considering signal processing and IoT network issues, is a relevant contribution to the design of green compressive sampling architectures in IoT networks.
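
    The paper's decision function is not reproduced here, but its flavor can be sketched: compare a simple energy model of in-network reconstruction plus encoded-data transmission against forwarding all raw CS measurements off-network. Every coefficient and size below is an illustrative assumption, not the paper's model.

```python
def in_network_energy(n, m, e_cpu_per_op, e_tx_per_bit, bits_encoded):
    # reconstruction cost (roughly O(n*m) per iteration here) + encoded output
    recon_ops = 50 * n * m           # assumed iteration count times cost
    return recon_ops * e_cpu_per_op + bits_encoded * e_tx_per_bit

def offload_energy(m, bits_per_meas, e_tx_per_bit, hops):
    # forward every raw measurement over `hops` wireless hops to the sink
    return m * bits_per_meas * e_tx_per_bit * hops

n, m = 1024, 256
E_in = in_network_energy(n, m, e_cpu_per_op=1e-9, e_tx_per_bit=5e-8,
                         bits_encoded=8 * n)
E_off = offload_energy(m, bits_per_meas=16, e_tx_per_bit=5e-8, hops=4)
print("reconstruct in-network" if E_in < E_off else "off-load reconstruction")
```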

    Blind Image Deblurring Driven by Nonlinear Processing in the Edge Domain

    This work addresses the problem of blind image deblurring, that is, of recovering an original image observed through one or more unknown linear channels and corrupted by additive noise. We resort to an iterative algorithm, belonging to the class of Bussgang algorithms, based on alternating a linear and a nonlinear image estimation stage. In detail, we investigate the design of a novel nonlinear processing stage acting on the Radon transform of the image edges. This choice is motivated by the fact that the Radon transform of the image edges describes well both the structural image features and the effect of blur, thus simplifying the design of the nonlinearity. The effect of the nonlinear processing is to thin the blurred image edges and to drive the overall blind restoration algorithm toward a sharp, focused image. The performance of the algorithm is assessed by experimental results pertaining to the restoration of blurred natural images.
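
    A highly simplified, 1-D sketch of the Bussgang-style alternation described above: a linear (Wiener-type) deconvolution stage, a nonlinear edge-thinning stage, and a blur re-estimation step, iterated. The paper's Radon-domain nonlinearity is replaced by a plain gradient soft-threshold, purely for illustration; all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
x = np.zeros(n); x[40:80] = 1.0                  # piecewise-constant signal
h = np.ones(7) / 7.0                             # unknown blur (boxcar)
y = np.convolve(x, h, mode="same") + 0.01 * rng.standard_normal(n)

Y = np.fft.rfft(y, n)
H = np.ones_like(Y)                              # initial blur guess: identity
for _ in range(30):
    # linear stage: Wiener-type deconvolution with the current blur estimate
    X = np.conj(H) * Y / (np.abs(H) ** 2 + 1e-2)
    x_lin = np.fft.irfft(X, n)
    # nonlinear stage: thin edges by soft-thresholding the gradient
    g = np.diff(x_lin, prepend=0.0)
    g = np.sign(g) * np.maximum(np.abs(g) - 0.02, 0.0)
    x_hat = np.cumsum(g)
    # blur update: least-squares fit of y ~ h * x_hat in the frequency domain
    Xh = np.fft.rfft(x_hat, n)
    H = Y * np.conj(Xh) / (np.abs(Xh) ** 2 + 1e-2)

print("MSE:", float(np.mean((x - x_hat) ** 2)))
```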

    Protein-Protein Interaction Prediction via Graph Signal Processing

    This paper tackles the problem of predicting the protein-protein interactions that arise in all living systems. Inference of protein-protein interactions is of paramount importance for understanding fundamental biological phenomena, including cross-species protein-protein interactions, such as those causing the 2020-21 pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Furthermore, it is also relevant for applications such as drug repurposing, where a known authorized drug is applied to novel diseases. On the other hand, a large fraction of existing protein interactions is not known, and their experimental measurement is resource-intensive. To this end, we adopt a Graph Signal Processing based approach, modeling the protein-protein interaction (PPI) network (a.k.a. the interactome) as a graph and some connectivity-related node features as a signal on the graph. We then leverage the signal-on-graph features to infer links between graph nodes, corresponding to interactions between proteins. Specifically, we develop a Markovian model of the signal on the graph that enables the representation of the connectivity properties of the nodes, and we exploit it to derive an algorithm that infers the graph edges. Performance assessment by several metrics recognized in the literature shows that the proposed approach, named GRAph signal processing Based PPI prediction (GRABP), effectively captures underlying, biologically grounded properties of the PPI network.
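
    GRABP itself is not reproduced here; the generic sketch below conveys the idea of scoring candidate protein pairs from graph-signal features, using diffused connectivity profiles as a crude stand-in for the paper's Markovian node features. The toy adjacency matrix and the diffusion depth are illustrative assumptions.

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 0],
              [0, 1, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)    # known interactions (toy)
n = A.shape[0]

# Diffuse each node's indicator signal K steps over the graph; the resulting
# connectivity profiles stand in for connectivity-related node features
K = 3
P = A / A.sum(1, keepdims=True)                 # row-stochastic transition
feat = sum(np.linalg.matrix_power(P, k) for k in range(1, K + 1)).T

# Score every non-adjacent pair by cosine similarity of their profiles
scores = []
for i in range(n):
    for j in range(i + 1, n):
        if A[i, j] == 0:
            c = feat[:, i] @ feat[:, j] / (
                np.linalg.norm(feat[:, i]) * np.linalg.norm(feat[:, j]) + 1e-12)
            scores.append(((i, j), round(float(c), 3)))

scores.sort(key=lambda t: -t[1])
print(scores)                                    # top pairs = predicted links
```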

    Reconstruction of Compressively Sampled Texture Images in the Graph-Based Transform Domain

    This paper addresses the problem of recovering texture images from compressively sampled measurements. Texture images hardly admit a sparse, or even compressible, representation in transformed domains (e.g., wavelet) and are therefore difficult to handle in the Compressive Sampling (CS) framework. Herein, we resort to the recently defined Graph-Based Transform (GBT), formerly introduced for depth-map coding, as a sparsifying transform for classes of textures sharing similar spatial patterns. Since the GBT proves to be a good candidate for the compact representation of some classes of textures, we leverage it for CS texture recovery. To this aim, we resort to a modified version of a state-of-the-art recovery algorithm to reconstruct the texture representation in the GBT domain. Numerical simulation results show that this approach outperforms state-of-the-art CS recovery algorithms on texture images.
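
    As a sketch of why a GBT can sparsify structured patches, the code below builds a 4-connected pixel graph with intensity-similarity weights, takes the Laplacian eigenvectors as the transform, and checks how much signal energy a handful of coefficients capture. The patch and the weighting are illustrative assumptions; the paper's CS recovery stage is not reproduced.

```python
import numpy as np

p = 8                                    # p x p texture patch
patch = np.kron(np.array([[0.0, 1.0],    # blocky two-level pattern
                          [1.0, 0.0]]), np.ones((4, 4)))
x = patch.ravel()

# 4-connected pixel graph with intensity-similarity weights
n = p * p
W = np.zeros((n, n))
for r in range(p):
    for c in range(p):
        i = r * p + c
        for dr, dc in ((0, 1), (1, 0)):
            rr, cc = r + dr, c + dc
            if rr < p and cc < p:
                j = rr * p + cc
                w = np.exp(-(x[i] - x[j]) ** 2)  # similar pixels: strong link
                W[i, j] = W[j, i] = w

L = np.diag(W.sum(1)) - W                # combinatorial graph Laplacian
_, U = np.linalg.eigh(L)                 # GBT basis: Laplacian eigenvectors
coeffs = U.T @ x
energy = np.sort(coeffs ** 2)[::-1]
print("energy captured by 4 largest GBT coefficients:",
      float(energy[:4].sum() / energy.sum()))
```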

    Improving J-Divergence of Brain Connectivity States by Graph Laplacian Denoising

    Functional connectivity (FC) can be represented as a network and is frequently used to better understand the neural underpinnings of complex tasks such as motor imagery (MI) detection in brain-computer interfaces (BCIs). However, errors in the estimation of connectivity can affect detection performance. In this work, we address the problem of denoising common connectivity estimates to improve the detectability of different connectivity states. Specifically, we propose a graph signal processing based denoising algorithm that acts on the network graph Laplacian. Further, we derive a novel formulation of the Jensen divergence for the denoised Laplacian under different states. Numerical simulations on synthetic data show that denoising improves the Jensen divergence of connectivity patterns corresponding to different task conditions. Furthermore, we apply the Laplacian denoising technique to brain networks estimated from real EEG data recorded during MI-BCI experiments. The novel formulation of the J-divergence allows us to quantify the distance between the FC networks in the motor imagery and resting states, as well as to understand the contribution of each Laplacian variable to the total J-divergence between two states. Experimental results on real MI-BCI EEG data demonstrate that the Laplacian denoising improves the separation of motor imagery and resting mental states, and it shortens the time interval required for connectivity estimation. We conclude that the approach shows promise for robust detection of connectivity states while being appealing for implementation in real-time BCI applications.
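
    A minimal sketch relating the two ingredients of the abstract: denoising a noisy graph Laplacian and measuring the J-divergence (here, the symmetrized KL divergence between zero-mean Gaussian connectivity models) between two states. The soft-threshold denoiser and the regularization constant are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def laplacian(W):
    return np.diag(W.sum(1)) - W

def denoise(L, tau=0.15):
    # shrink weak off-diagonal couplings, then rebuild a valid Laplacian
    W = -L.copy()
    np.fill_diagonal(W, 0.0)
    W = np.where(np.abs(W) > tau, W, 0.0)
    W = np.maximum((W + W.T) / 2.0, 0.0)
    return laplacian(W)

def j_divergence(L1, L2, eps=1e-2):
    # symmetrized KL between zero-mean Gaussians with precisions L + eps*I
    n = L1.shape[0]
    C1 = np.linalg.inv(L1 + eps * np.eye(n))
    C2 = np.linalg.inv(L2 + eps * np.eye(n))
    return 0.5 * np.trace(np.linalg.solve(C1, C2)
                          + np.linalg.solve(C2, C1) - 2.0 * np.eye(n))

rng = np.random.default_rng(3)
W_rest = rng.uniform(0.0, 1.0, (6, 6))
W_rest = (W_rest + W_rest.T) / 2.0
np.fill_diagonal(W_rest, 0.0)
W_mi = W_rest.copy()
W_mi[0, 1] = W_mi[1, 0] = 3.0                    # task-modulated connection

noise = 0.3 * np.abs(rng.standard_normal((6, 6)))
noise = (noise + noise.T) / 2.0
np.fill_diagonal(noise, 0.0)

L_rest, L_mi = laplacian(W_rest + noise), laplacian(W_mi + noise)
print("J-divergence, raw Laplacians:     ", j_divergence(L_rest, L_mi))
print("J-divergence, denoised Laplacians:",
      j_divergence(denoise(L_rest), denoise(L_mi)))
```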